    Neural network learning dynamics in a path integral framework

    A path-integral formalism is proposed for studying the dynamical evolution in time of patterns in an artificial neural network in the presence of noise. An effective cost function is constructed which determines the unique global minimum of the neural network system. The perturbative method discussed also provides a way of determining the storage capacity of the network. Comment: 12 pages

    Vibrational energy transfer in ultracold molecule - molecule collisions

    We present a rigorous study of vibrational relaxation in p-H2 + p-H2 collisions at cold and ultracold temperatures and identify an efficient mechanism of ro-vibrational energy transfer. If the colliding molecules are in different rotational and vibrational levels, the internal energy may be transferred between the molecules through an extremely state-selective process involving simultaneous conservation of internal energy and total rotational angular momentum. The same transition in collisions of distinguishable molecules corresponds to the rotational energy transfer from one vibrational state of the colliding molecules to another. Comment: 4 pages, 4 figures

    Statistical guarantees for the EM algorithm: From population to sample-based analysis

    We develop a general framework for proving rigorous guarantees on the performance of the EM algorithm and a variant known as gradient EM. Our analysis is divided into two parts: a treatment of these algorithms at the population level (in the limit of infinite data), followed by results that apply to updates based on a finite set of samples. First, we characterize the domain of attraction of any global maximizer of the population likelihood. This characterization is based on a novel view of the EM updates as a perturbed form of likelihood ascent, or in parallel, of the gradient EM updates as a perturbed form of standard gradient ascent. Leveraging this characterization, we then provide non-asymptotic guarantees on the EM and gradient EM algorithms when applied to a finite set of samples. We develop consequences of our general theory for three canonical examples of incomplete-data problems: mixture of Gaussians, mixture of regressions, and linear regression with covariates missing completely at random. In each case, our theory guarantees that with a suitable initialization, a relatively small number of EM (or gradient EM) steps will yield (with high probability) an estimate that is within statistical error of the MLE. We provide simulations to confirm this theoretically predicted behavior.